Expand infrastructure modules with additional security groups #1
Walkthrough

This pull request adds infrastructure modules for multiple AWS services, including bastion hosts, a CloudFront distribution, EKS cluster networking, an RDS database, S3 buckets, and a VPC with security groups. Changes span resource definitions, variable declarations, and output exports across nine Terraform files.
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60–75 minutes

Pre-merge checks and finishing touches: ❌ Failed checks (1 warning) | ✅ Passed checks (2 passed)
Actionable comments posted: 12
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
modules/s3/main.tf (1)
1-131: Restore server-side encryption for both S3 buckets. The encryption resources were deliberately removed in this PR (verified via git diff), but this creates a security regression that contradicts the project's documented security best practices. The README explicitly states under both the "S3 Module" section and "Security Best Practices" that server-side encryption should be enabled by default. Additionally, no variables provide an alternative encryption configuration mechanism, and no inline comments document the rationale for this removal.
Both static_content and user_content buckets now lack enforced encryption at rest, introducing data exposure risks and potential compliance violations (GDPR, HIPAA, PCI-DSS).
Restore the encryption configuration:
```hcl
resource "aws_s3_bucket_server_side_encryption_configuration" "static_content" {
  bucket = aws_s3_bucket.static_content.id

  rule {
    apply_server_side_encryption_by_default {
      sse_algorithm = "AES256"
    }
  }
}

resource "aws_s3_bucket_server_side_encryption_configuration" "user_content" {
  bucket = aws_s3_bucket.user_content.id

  rule {
    apply_server_side_encryption_by_default {
      sse_algorithm = "AES256"
    }
  }
}
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (10)
- modules/bastion/main.tf (2 hunks)
- modules/cloudfront/main.tf (1 hunks)
- modules/cloudfront/variables.tf (1 hunks)
- modules/eks/main.tf (1 hunks)
- modules/eks/variables.tf (1 hunks)
- modules/rds/main.tf (1 hunks)
- modules/rds/outputs.tf (1 hunks)
- modules/rds/variables.tf (1 hunks)
- modules/s3/main.tf (1 hunks)
- modules/vpc/main.tf (1 hunks)
🧰 Additional context used
🪛 Checkov (3.2.334)
modules/rds/main.tf
[medium] 37-67: Ensure RDS database has IAM authentication enabled
(CKV_AWS_161)
[medium] 37-67: Ensure that AWS database instances have deletion protection enabled
(CKV_AWS_293)
[high] 37-67: Ensure RDS Performance Insights are encrypted using KMS CMKs
(CKV_AWS_354)
[medium] 37-67: Ensure all data stored in RDS is not publicly accessible
(CKV_AWS_17)
🪛 Gitleaks (8.28.0)
modules/rds/main.tf
[high] 50-50: Identified a HashiCorp Terraform password field, risking unauthorized infrastructure configuration and security breaches.
(hashicorp-tf-password)
🔇 Additional comments (5)
modules/cloudfront/main.tf (1)
117-123: Tag additions look good, but verify variable type compatibility. The tags are well-structured and add useful metadata. However, the CostCenter tag references var.cost_center, which has a type/value mismatch in modules/cloudfront/variables.tf (see separate comment).

modules/bastion/main.tf (1)
172-172: Good security practice: instance metadata tags enabled. Enabling instance metadata tags is a sensible hardening measure that allows EC2 instance profiles to reference tags without additional API calls.
modules/eks/variables.tf (1)
98-101: Variable declaration is correct. The vpc_cidr variable is properly typed and documented. Requiring it as a mandatory input is appropriate for VPC-level security configuration.

modules/rds/outputs.tf (1)
1-19: Output definitions are clean and well-documented. The outputs properly expose the RDS instance and security group identifiers. However, review the underlying RDS resources in modules/rds/main.tf for security concerns.

modules/rds/variables.tf (1)
1-44: Verify completeness of RDS module variables. Before approving, please confirm the following by sharing or pointing to modules/rds/main.tf:

- Are RDS engine type and version defined? If so, where (variables vs. hardcoded)?
- Are there any other sensitive values (e.g., database name, encryption keys) hardcoded in main.tf?
- Is the DB subnet group explicitly specified, or auto-created?
- Are backup retention and encryption-at-rest policies defined?
This ensures the variable set aligns with your RDS configuration and security posture.
```hcl
ingress {
  description = "HTTP"
  from_port   = 80
  to_port     = 80
  protocol    = "tcp"
  cidr_blocks = ["0.0.0.0/0"]
}

ingress {
  description = "HTTPS"
  from_port   = 443
  to_port     = 443
  protocol    = "tcp"
  cidr_blocks = ["0.0.0.0/0"]
}

ingress {
  description = "MySQL/Aurora"
  from_port   = 3306
  to_port     = 3306
  protocol    = "tcp"
  cidr_blocks = ["0.0.0.0/0"]
}

ingress {
  description = "PostgreSQL"
  from_port   = 5432
  to_port     = 5432
  protocol    = "tcp"
  cidr_blocks = ["0.0.0.0/0"]
}

ingress {
  description = "Redis"
  from_port   = 6379
  to_port     = 6379
  protocol    = "tcp"
  cidr_blocks = ["0.0.0.0/0"]
}

ingress {
  description = "Custom app port"
  from_port   = 8080
  to_port     = 8080
  protocol    = "tcp"
  cidr_blocks = ["0.0.0.0/0"]
}
```
Critical: Bastion host exposes database and application ports to 0.0.0.0/0.
A bastion should serve as a jump host for SSH access only. Exposing database (3306, 5432), cache (6379), and application (8080) ports to the internet defeats the purpose of using a bastion and creates a security vulnerability.
These rules should be removed or restricted to specific internal subnets/security groups:
```diff
- ingress {
-   description = "HTTP"
-   from_port   = 80
-   to_port     = 80
-   protocol    = "tcp"
-   cidr_blocks = ["0.0.0.0/0"]
- }
-
- ingress {
-   description = "HTTPS"
-   from_port   = 443
-   to_port     = 443
-   protocol    = "tcp"
-   cidr_blocks = ["0.0.0.0/0"]
- }
-
- ingress {
-   description = "MySQL/Aurora"
-   from_port   = 3306
-   to_port     = 3306
-   protocol    = "tcp"
-   cidr_blocks = ["0.0.0.0/0"]
- }
-
- ingress {
-   description = "PostgreSQL"
-   from_port   = 5432
-   to_port     = 5432
-   protocol    = "tcp"
-   cidr_blocks = ["0.0.0.0/0"]
- }
-
- ingress {
-   description = "Redis"
-   from_port   = 6379
-   to_port     = 6379
-   protocol    = "tcp"
-   cidr_blocks = ["0.0.0.0/0"]
- }
-
- ingress {
-   description = "Custom app port"
-   from_port   = 8080
-   to_port     = 8080
-   protocol    = "tcp"
-   cidr_blocks = ["0.0.0.0/0"]
- }
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
🤖 Prompt for AI Agents
In modules/bastion/main.tf around lines 29 to 75, the bastion security group
currently opens database (3306), postgres (5432), redis (6379), and app (8080)
ports to 0.0.0.0/0 which is insecure; limit the bastion to SSH-only access from
trusted IPs and remove or tighten the other ingress rules by either deleting
those ingress blocks or replacing their cidr_blocks with a restricted internal
CIDR or security-group-based rule that only allows access from
application/private subnets or specific DB/security group IDs; ensure only port
22 (ssh) remains open to external/trusted IPs and reference internal security
groups or VPC CIDR for any necessary inter-service access.
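To make the prompt concrete, here is one possible SSH-only shape for the bastion security group (a sketch only; var.admin_cidr_blocks and the VPC-limited egress are assumptions, not code from this PR):

```hcl
resource "aws_security_group" "bastion" {
  name        = "${var.environment}-bastion-sg"
  description = "SSH-only bastion security group"
  vpc_id      = var.vpc_id

  ingress {
    description = "SSH from trusted admin ranges only"
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = var.admin_cidr_blocks # e.g. ["203.0.113.0/24"], never 0.0.0.0/0
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = [var.vpc_cidr] # outbound limited to the VPC
  }
}
```

Any inter-service access (databases, caches) would then be granted on the target services' security groups by referencing this bastion security group ID, not on the bastion itself.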
```hcl
variable "cost_center" {
  description = "Cost center tag for billing"
  type        = number
  default     = "engineering"
}
```
Fix type mismatch: variable type is number but default is string.
A number type variable cannot have a string default value. This will fail Terraform validation.
Apply this diff to fix:
```diff
 variable "cost_center" {
   description = "Cost center tag for billing"
-  type        = number
-  default     = "engineering"
+  type        = string
+  default     = "engineering"
 }
```

Alternatively, if cost_center should be numeric, use a number as the default:
```diff
 variable "cost_center" {
   description = "Cost center tag for billing"
   type        = number
-  default     = "engineering"
+  default     = 1001
 }
```

📝 Committable suggestion
```hcl
variable "cost_center" {
  description = "Cost center tag for billing"
  type        = string
  default     = "engineering"
}
```
🤖 Prompt for AI Agents
In modules/cloudfront/variables.tf around lines 81 to 85, the variable
"cost_center" is declared with type = number but has a string default
"engineering", causing Terraform validation to fail; change the variable type to
string (type = string) so the default value matches, or if it should be numeric,
replace the default with a numeric value (e.g., 123) and adjust any consuming
code accordingly.
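As a further guard, a validation block can catch bad cost_center values at plan time (a sketch, assuming the string variant is chosen; the regex is illustrative, not from this PR):

```hcl
variable "cost_center" {
  description = "Cost center tag for billing"
  type        = string
  default     = "engineering"

  validation {
    # Reject values that would not match billing conventions (illustrative rule).
    condition     = can(regex("^[a-z0-9-]+$", var.cost_center))
    error_message = "cost_center must be lowercase alphanumeric (hyphens allowed)."
  }
}
```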
```hcl
resource "aws_security_group" "node" {
  name        = "${var.cluster_name}-node-sg"
  description = "Security group for EKS nodes"
  vpc_id      = var.vpc_id

  ingress {
    description = "Kubelet API"
    from_port   = 10251
    to_port     = 10251
    protocol    = "tcp"
    cidr_blocks = [var.vpc_cidr]
  }

  ingress {
    description = "SSH access"
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  ingress {
    description = "NodePort services"
    from_port   = 30000
    to_port     = 32767
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }

  tags = {
    Name        = "${var.cluster_name}-node-sg"
    Environment = var.environment
  }
}
```
Critical: EKS node security group allows SSH from 0.0.0.0/0.
EKS nodes should not be directly accessible via SSH from the internet. This creates a significant attack surface. Nodes should be accessed via AWS Systems Manager Session Manager or through a bastion host.
Also, exposing the NodePort service range (30000-32767) to 0.0.0.0/0 allows direct access to Kubernetes services and should be restricted.
Apply this diff to restrict access:
```diff
 resource "aws_security_group" "node" {
   name        = "${var.cluster_name}-node-sg"
   description = "Security group for EKS nodes"
   vpc_id      = var.vpc_id

   ingress {
     description = "Kubelet API"
     from_port   = 10251
     to_port     = 10251
     protocol    = "tcp"
     cidr_blocks = [var.vpc_cidr]
   }

-  ingress {
-    description = "SSH access"
-    from_port   = 22
-    to_port     = 22
-    protocol    = "tcp"
-    cidr_blocks = ["0.0.0.0/0"]
-  }
-
-  ingress {
-    description = "NodePort services"
-    from_port   = 30000
-    to_port     = 32767
-    protocol    = "tcp"
-    cidr_blocks = ["0.0.0.0/0"]
-  }

   egress {
     from_port   = 0
     to_port     = 0
     protocol    = "-1"
     cidr_blocks = ["0.0.0.0/0"]
   }

   tags = {
     Name        = "${var.cluster_name}-node-sg"
     Environment = var.environment
   }
 }
```

If SSH access is needed for debugging, restrict to a bastion security group or specific admin IPs using source security group ID or CIDR.
📝 Committable suggestion
```hcl
resource "aws_security_group" "node" {
  name        = "${var.cluster_name}-node-sg"
  description = "Security group for EKS nodes"
  vpc_id      = var.vpc_id

  ingress {
    description = "Kubelet API"
    from_port   = 10251
    to_port     = 10251
    protocol    = "tcp"
    cidr_blocks = [var.vpc_cidr]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }

  tags = {
    Name        = "${var.cluster_name}-node-sg"
    Environment = var.environment
  }
}
```
🤖 Prompt for AI Agents
In modules/eks/main.tf around lines 49-89 the aws_security_group "node"
currently allows SSH (port 22) and the entire NodePort range (30000-32767) from
0.0.0.0/0; remove or tighten these wide-open ingress rules. Replace the SSH
ingress with either no SSH rule (rely on SSM Session Manager) or a rule that
allows only a bastion host security group (use security_groups or
source_security_group_id) or an explicit admin CIDR; similarly restrict the
NodePort range to internal-only access by using var.vpc_cidr or a specific
load-balancer/bastion SG instead of 0.0.0.0/0; keep the kubelet/cluster-internal
ports as needed and ensure tags/description reflect the tightened access.
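One way to express the tightened rules as standalone rule resources (a sketch; var.bastion_security_group_id is an assumed input, not part of this PR):

```hcl
# SSH to nodes only from the bastion's security group (hypothetical variable).
resource "aws_security_group_rule" "node_ssh_from_bastion" {
  type                     = "ingress"
  description              = "SSH from bastion only"
  from_port                = 22
  to_port                  = 22
  protocol                 = "tcp"
  security_group_id        = aws_security_group.node.id
  source_security_group_id = var.bastion_security_group_id
}

# NodePort range restricted to the VPC (e.g. for internal load balancers).
resource "aws_security_group_rule" "node_nodeport_internal" {
  type              = "ingress"
  description       = "NodePort services from within the VPC"
  from_port         = 30000
  to_port           = 32767
  protocol          = "tcp"
  security_group_id = aws_security_group.node.id
  cidr_blocks       = [var.vpc_cidr]
}
```

Standalone aws_security_group_rule resources also avoid the pitfall of mixing inline and external rules on the same security group.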
```hcl
resource "aws_security_group" "rds" {
  name        = "${var.environment}-rds-sg"
  description = "Security group for RDS"
  vpc_id      = var.vpc_id

  ingress {
    description = "PostgreSQL from anywhere"
    from_port   = 5432
    to_port     = 5432
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }

  tags = {
    Name        = "${var.environment}-rds-sg"
    Environment = var.environment
  }
}
```
Critical: RDS security group allows PostgreSQL access from 0.0.0.0/0.
The RDS database should not be directly exposed to the internet. Combined with publicly_accessible = true on the database instance (line 54), this creates a critical attack vector.
Restrict the security group to only allow access from application subnets or security groups:
```diff
 resource "aws_security_group" "rds" {
   name        = "${var.environment}-rds-sg"
   description = "Security group for RDS"
   vpc_id      = var.vpc_id

   ingress {
-    description = "PostgreSQL from anywhere"
+    description = "PostgreSQL from application tier"
     from_port   = 5432
     to_port     = 5432
     protocol    = "tcp"
-    cidr_blocks = ["0.0.0.0/0"]
+    security_groups = [var.app_security_group_id]
   }

   egress {
     from_port   = 0
     to_port     = 0
     protocol    = "-1"
     cidr_blocks = ["0.0.0.0/0"]
   }

   tags = {
     Name        = "${var.environment}-rds-sg"
     Environment = var.environment
   }
 }
```

📝 Committable suggestion
```hcl
resource "aws_security_group" "rds" {
  name        = "${var.environment}-rds-sg"
  description = "Security group for RDS"
  vpc_id      = var.vpc_id

  ingress {
    description     = "PostgreSQL from application tier"
    from_port       = 5432
    to_port         = 5432
    protocol        = "tcp"
    security_groups = [var.app_security_group_id]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }

  tags = {
    Name        = "${var.environment}-rds-sg"
    Environment = var.environment
  }
}
```
```hcl
resource "aws_db_instance" "postgres" {
  identifier     = "${var.environment}-postgres"
  engine         = "postgres"
  engine_version = "14.7"
  instance_class = var.instance_class

  allocated_storage     = var.allocated_storage
  max_allocated_storage = var.max_allocated_storage
  storage_type          = "gp3"
  storage_encrypted     = false

  db_name  = var.database_name
  username = var.master_username
  password = "admin123"

  vpc_security_group_ids = [aws_security_group.rds.id]
  db_subnet_group_name   = aws_db_subnet_group.main.name
  publicly_accessible    = true

  backup_retention_period = 0
  skip_final_snapshot     = true
  deletion_protection     = false

  performance_insights_enabled    = false
  enabled_cloudwatch_logs_exports = []

  tags = {
    Name        = "${var.environment}-postgres"
    Environment = var.environment
  }
}
```
Critical: Hardcoded database password in source code.
The password "admin123" is hardcoded in line 50. This is a severe security vulnerability that exposes database credentials to anyone with repository access. Additionally, storage encryption is disabled (line 46), the database is publicly accessible (line 54), and multiple security best practices are missing.
Apply this comprehensive security hardening:
```diff
 resource "aws_db_instance" "postgres" {
   identifier     = "${var.environment}-postgres"
   engine         = "postgres"
   engine_version = "14.7"
   instance_class = var.instance_class

   allocated_storage     = var.allocated_storage
   max_allocated_storage = var.max_allocated_storage
   storage_type          = "gp3"
-  storage_encrypted     = false
+  storage_encrypted     = true
+  kms_key_id            = var.db_kms_key_id

   db_name  = var.database_name
   username = var.master_username
-  password = "admin123"
+  password = var.master_password

   vpc_security_group_ids = [aws_security_group.rds.id]
   db_subnet_group_name   = aws_db_subnet_group.main.name
-  publicly_accessible    = true
+  publicly_accessible    = false

-  backup_retention_period = 0
-  skip_final_snapshot     = true
-  deletion_protection     = false
+  backup_retention_period   = 30
+  skip_final_snapshot       = false
+  final_snapshot_identifier = "${var.environment}-postgres-final-snapshot-${formatdate("YYYY-MM-DD-hhmm", timestamp())}"
+  deletion_protection       = true

-  performance_insights_enabled    = false
-  enabled_cloudwatch_logs_exports = []
+  performance_insights_enabled          = true
+  performance_insights_retention_period = 31
+  performance_insights_kms_key_id       = var.insights_kms_key_id
+  enabled_cloudwatch_logs_exports       = ["postgresql"]
+
+  iam_database_authentication_enabled = true

   tags = {
     Name        = "${var.environment}-postgres"
     Environment = var.environment
   }
 }
```

Actions required:

- Use AWS Secrets Manager or similar to manage the database password variable
- Add required variables to modules/rds/variables.tf: master_password, db_kms_key_id, insights_kms_key_id
- Enable IAM database authentication for credential-less access from EKS pods
- Consider updating the RDS security group to use security group IDs instead of CIDR blocks (see separate comment)

This aligns with AWS security best practices and the CKV_AWS checks flagged by Checkov.
Committable suggestion skipped: line range outside the PR's diff.
🧰 Tools
🪛 Checkov (3.2.334)
[medium] 37-67: Ensure RDS database has IAM authentication enabled
(CKV_AWS_161)
[medium] 37-67: Ensure that AWS database instances have deletion protection enabled
(CKV_AWS_293)
[high] 37-67: Ensure RDS Performance Insights are encrypted using KMS CMKs
(CKV_AWS_354)
[medium] 37-67: Ensure all data stored in RDS is not publicly accessible
(CKV_AWS_17)
🪛 Gitleaks (8.28.0)
[high] 50-50: Identified a HashiCorp Terraform password field, risking unauthorized infrastructure configuration and security breaches.
(hashicorp-tf-password)
🤖 Prompt for AI Agents
modules/rds/main.tf lines 37-67: Remove the hardcoded password and implement
secrets-based password management (e.g., reference a Secrets Manager secret or
accept a var.master_password which is populated from Secrets Manager at deploy
time) instead of "admin123"; add new variables in modules/rds/variables.tf for
master_password, db_kms_key_id, and insights_kms_key_id; enable
storage_encrypted = true and set kms_key_id = var.db_kms_key_id; set
publicly_accessible = false; enable iam_database_authentication_enabled = true; enable
performance_insights and configure performance_insights_kms_key_id =
var.insights_kms_key_id (and set performance_insights_enabled = true if
required); remove any plaintext secrets from the TF file and ensure the calling
module/terraform root retrieves the secret from AWS Secrets Manager (or other
secret backend) and passes it into the module via the new master_password
variable; also verify the RDS security group is referenced by ID (not open CIDR)
in vpc_security_group_ids as part of the separate SG comment.
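A minimal sketch of the secrets-based wiring in a root module (the secret name, module path, and module.vpc outputs are assumptions for illustration):

```hcl
# Root module: fetch the password from Secrets Manager and pass it in.
data "aws_secretsmanager_secret_version" "db_password" {
  secret_id = "app/rds/master-password" # hypothetical secret name
}

module "rds" {
  source = "./modules/rds" # assumed module path

  environment     = "prod"
  vpc_id          = module.vpc.vpc_id
  subnet_ids      = module.vpc.private_subnet_ids
  master_password = data.aws_secretsmanager_secret_version.db_password.secret_string
}
```

Note that the secret value will still be recorded in Terraform state, so the state backend itself must be encrypted and access-controlled.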
```hcl
performance_insights_enabled    = false
enabled_cloudwatch_logs_exports = []
```
Enable RDS Performance Insights and logging for observability.
Performance Insights should be enabled (with KMS encryption) and CloudWatch logs should capture PostgreSQL activity for debugging and security auditing.
Enable Performance Insights and logging:
```diff
-  performance_insights_enabled    = false
-  enabled_cloudwatch_logs_exports = []
+  performance_insights_enabled          = true
+  performance_insights_retention_period = 31
+  performance_insights_kms_key_id       = var.insights_kms_key_id
+  enabled_cloudwatch_logs_exports       = ["postgresql"]
```

📝 Committable suggestion
```hcl
performance_insights_enabled          = true
performance_insights_retention_period = 31
performance_insights_kms_key_id       = var.insights_kms_key_id
enabled_cloudwatch_logs_exports       = ["postgresql"]
```
🤖 Prompt for AI Agents
In modules/rds/main.tf around lines 60-61, performance_insights is currently
disabled and no CloudWatch logs are exported; change
performance_insights_enabled to true, add performance_insights_kms_key_id
referencing an existing or newly-created KMS key for encryption, and set
enabled_cloudwatch_logs_exports to include PostgreSQL log streams (e.g.
"postgresql" and "error" or other DB-specific exports you use). If there isn’t
an RDS KMS key resource in the module, create one (or accept it via a module
input variable) and ensure the DB instance references that key; also verify
IAM/monitoring permissions and any module inputs/outputs are updated
accordingly.
```hcl
variable "environment" {
  description = "Environment name"
  type        = string
}

variable "vpc_id" {
  description = "VPC ID"
  type        = string
}

variable "subnet_ids" {
  description = "List of subnet IDs for RDS"
  type        = list(string)
}

variable "instance_class" {
  description = "RDS instance class"
  type        = string
  default     = "db.t3.micro"
}

variable "allocated_storage" {
  description = "Allocated storage in GB"
  type        = number
  default     = 20
}

variable "max_allocated_storage" {
  description = "Maximum allocated storage for autoscaling"
  type        = number
  default     = 100
}

variable "database_name" {
  description = "Name of the database"
  type        = string
  default     = "mydb"
}

variable "master_username" {
  description = "Master username"
  type        = string
  default     = "admin"
}
```
🧩 Analysis chain
Critical: Missing master_password variable.
The RDS module variables are incomplete—the master_password variable is missing. RDS instances require database authentication, and the password must be supplied as a Terraform variable, never hardcoded or left with a default value. This is a security requirement.
Add the following variable block:
```diff
+variable "master_password" {
+  description = "Master database password"
+  type        = string
+  sensitive   = true
+  # No default - passwords should never have defaults
+}
```

Additionally, verify that master_password is not hardcoded anywhere in modules/rds/main.tf. If it is, that must be remediated immediately as a security violation.
I'll verify the review comment by examining the RDS module files to confirm whether the concerns are valid.
---
🏁 Script executed:
```shell
#!/bin/bash
# List all files in the rds module
echo "=== Files in modules/rds ==="
find modules/rds -type f 2>/dev/null | head -20

echo -e "\n=== Full content of variables.tf ==="
cat modules/rds/variables.tf 2>/dev/null || echo "File not found"

echo -e "\n=== Searching for master_password references ==="
rg -n "master_password" modules/rds/ 2>/dev/null || echo "No matches found"

echo -e "\n=== Checking main.tf for hardcoded passwords ==="
cat modules/rds/main.tf 2>/dev/null || echo "main.tf not found"
```
Length of output: 2729
Critical security violations found: Hardcoded password in main.tf and multiple misconfigurations.
The missing master_password variable is confirmed, but the actual critical issue is more severe: modules/rds/main.tf contains a hardcoded password password = "admin123" in the aws_db_instance resource (line 23). This is a security violation that must be remediated immediately.
Required fixes:
1. Add master_password variable to modules/rds/variables.tf:

```diff
+variable "master_password" {
+  description = "Master database password"
+  type        = string
+  sensitive   = true
+}
```

2. Replace hardcoded password in modules/rds/main.tf:

```diff
-  password = "admin123"
+  password = var.master_password
```

3. Address additional security concerns in modules/rds/main.tf:

- storage_encrypted = false → should be true
- publicly_accessible = true → should be false (unless explicitly required)
- backup_retention_period = 0 → should have appropriate retention
- Security group allows PostgreSQL from 0.0.0.0/0 → restrict to specific CIDRs
- deletion_protection = false → consider setting to true
📝 Committable suggestion
```hcl
variable "environment" {
  description = "Environment name"
  type        = string
}

variable "vpc_id" {
  description = "VPC ID"
  type        = string
}

variable "subnet_ids" {
  description = "List of subnet IDs for RDS"
  type        = list(string)
}

variable "instance_class" {
  description = "RDS instance class"
  type        = string
  default     = "db.t3.micro"
}

variable "allocated_storage" {
  description = "Allocated storage in GB"
  type        = number
  default     = 20
}

variable "max_allocated_storage" {
  description = "Maximum allocated storage for autoscaling"
  type        = number
  default     = 100
}

variable "database_name" {
  description = "Name of the database"
  type        = string
  default     = "mydb"
}

variable "master_username" {
  description = "Master username"
  type        = string
  default     = "admin"
}

variable "master_password" {
  description = "Master database password"
  type        = string
  sensitive   = true
}
```
🤖 Prompt for AI Agents
In modules/rds/variables.tf lines 1-44: add a new sensitive string variable
named master_password (no default) and document it, then update
modules/rds/main.tf to stop using the hardcoded password ("admin123") by
referencing this new variable; additionally, in modules/rds/main.tf change
storage_encrypted to true, set publicly_accessible to false (unless a documented
exception is required), set a non-zero backup_retention_period appropriate for
your SLA, enable or recommend deletion_protection=true, and tighten the security
group rule that currently allows PostgreSQL from 0.0.0.0/0 to restrict allowed
CIDRs or reference a variable for allowed_cidrs. Ensure the new master_password
variable is marked sensitive and required so credentials are not stored in
source.
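The tightened security-group rule described in the prompt could be sketched as follows (the `allowed_cidrs` variable and the `aws_security_group.rds` name are assumptions):

```hcl
variable "allowed_cidrs" {
  description = "CIDR blocks permitted to reach PostgreSQL"
  type        = list(string)
}

resource "aws_security_group_rule" "postgres_ingress" {
  type              = "ingress"
  from_port         = 5432
  to_port           = 5432
  protocol          = "tcp"
  cidr_blocks       = var.allowed_cidrs # replaces the open 0.0.0.0/0 rule
  security_group_id = aws_security_group.rds.id
}
```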
| variable "master_username" { | ||
| description = "Master username" | ||
| type = string | ||
| default = "admin" |
Avoid predictable default for master_username.
Using "admin" as the default master username is not a security best practice—it's a commonly guessed/known username. Consider either requiring this as a mandatory input (remove the default) or using a less obvious default value.
Apply this diff to remove the default:
variable "master_username" {
description = "Master username"
type = string
- default = "admin"
}📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
| variable "master_username" { | |
| description = "Master username" | |
| type = string | |
| default = "admin" | |
| variable "master_username" { | |
| description = "Master username" | |
| type = string | |
| } |
🤖 Prompt for AI Agents
In modules/rds/variables.tf around lines 40 to 43, the variable
"master_username" currently defaults to "admin", which is predictable; remove
the default to make the variable required (delete the default line) or replace
it with a less obvious placeholder if you must keep a default, and update any
callers/modules/README that set or expect this variable so they supply a secure
username when instantiating the module.
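With the default removed, every caller must supply the username explicitly. A hypothetical caller might look like this (module path, output names, and values are all assumptions for illustration):

```hcl
module "rds" {
  source      = "../modules/rds"
  environment = "prod"
  vpc_id      = module.vpc.vpc_id
  subnet_ids  = module.vpc.private_subnet_ids

  # Required now that the predictable "admin" default is gone
  master_username = var.rds_master_username
  master_password = var.rds_master_password
}
```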
```hcl
block_public_acls       = false
block_public_policy     = false
ignore_public_acls      = false
restrict_public_buckets = false
```
🛠️ Refactor suggestion | 🟠 Major
🧩 Analysis chain
Critical: Disabling public access blocks creates uncontrolled public exposure.
All four Block Public Access settings are set to false, which removes all guardrails preventing public access to the static_content bucket. Combined with the bucket policy allowing CloudFront OAI, this configuration enables arbitrary public read/write access unless tightly controlled by bucket ACLs or additional policies.
This is a significant security risk if unintentional. If this is intentional for public CDN content, it must be explicitly documented with business justification and monitoring.
Please confirm:
- Is disabling public access blocks intentional? If yes, provide business justification.
- Are there additional access controls (bucket ACLs, restrictive object policies) that compensate for this?
- Is there monitoring/alerting in place for unauthorized modifications to this bucket?
I need to verify the concerns in the review comment by inspecting the actual codebase. Let me gather the necessary information.
🏁 Script executed:
```bash
#!/bin/bash
# Inspect the full S3 configuration file to understand context
echo "=== Full modules/s3/main.tf content ==="
cat modules/s3/main.tf
echo -e "\n=== Git diff to see what changed ==="
git diff HEAD modules/s3/main.tf || echo "Git diff not available"
echo -e "\n=== Search for encryption resources ==="
rg -n "aws_s3_bucket_server_side_encryption_configuration|kms_master_key_id" modules/s3/
```

Length of output: 3348
🌐 Web query:
S3 public access block false security best practices CloudFront CDN
💡 Result:
Concise best-practices when S3 “Block Public Access” is false but you serve via CloudFront:
- Use CloudFront Origin Access Control (OAC) — not public buckets — and configure an S3 bucket policy that only allows the CloudFront distribution (via cloudfront.amazonaws.com + AWS:SourceArn condition). [1]. (docs.aws.amazon.com)
- Set S3 Object Ownership = bucket-owner-enforced (disable ACLs) so only bucket policies control access. [1]. (docs.aws.amazon.com)
- If possible, re-enable S3 Block Public Access; otherwise add an explicit bucket policy that DENIES public access and only ALLOWs CloudFront (use deny-by-default + allow-for-CloudFront statement). [2][3]. (docs.aws.amazon.com)
- Require HTTPS between viewers and CloudFront and between CloudFront and S3 (HTTPS only / redirect). Enforce minimum TLS versions on CloudFront. (docs.aws.amazon.com)
- Use signed URLs/cookies or OAC + IAM conditions for private content; scope CloudFront cache behaviors and restrict paths as needed. (docs.aws.amazon.com)
- If S3 objects use SSE-KMS, add the CloudFront service principal to the KMS key policy with a SourceArn condition for the distribution. [1]. (docs.aws.amazon.com)
- Add edge protections: AWS WAF (rate limit, IP sets), Origin Shield or regional caching to reduce origin exposure, and monitoring/alerting on S3 and CloudFront access logs. (docs.aws.amazon.com)
- Automate governance: enforce OAC/OAI and non-public origin rules via AWS Config / Control Tower / Infrastructure-as-code and CI checks. (docs.aws.amazon.com)
If you want, I can:
- produce a sample S3 bucket policy that only allows a given CloudFront distribution (with placeholders), or
- validate your existing bucket policy and CloudFront settings (paste them here).
Citations:
- 1: https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/private-content-restricting-access-to-s3.html?utm_source=openai
- 2: https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/private-content-restricting-access-to-s3.html?utm_source=openai
- 3: https://docs.aws.amazon.com/zh_cn/whitepapers/latest/secure-content-delivery-amazon-cloudfront/s3-origin-with-cloudfront.html?utm_source=openai
- 4: https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/private-content-restricting-access-to-origin.html?utm_source=openai
- 5: https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/private-content-restricting-access-to-s3.html?utm_source=openai
- 6: https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/private-content-restricting-access-to-s3.html?utm_source=openai
- 7: https://docs.aws.amazon.com/securityhub/latest/userguide/cloudfront-controls.html?utm_source=openai
- 8: https://docs.aws.amazon.com/controltower/latest/controlreference/cloudfront-rules.html?utm_source=openai
Add missing server-side encryption and upgrade security controls for the CDN-served S3 bucket.

The bucket policy correctly restricts access to the CloudFront OAI only, so direct public access is not uncontrolled. However, the `static_content` bucket is missing critical security hardening:

- Missing server-side encryption: no `aws_s3_bucket_server_side_encryption_configuration` is defined. All S3 objects should be encrypted.
- Outdated OAI pattern: the bucket uses `aws_cloudfront_origin_access_identity` instead of the recommended `aws_cloudfront_origin_access_control` (OAC). OAC is the newer standard with better IAM integration.
- No Object Ownership control: add `aws_s3_bucket_ownership_controls` with `rule { object_ownership = "BucketOwnerEnforced" }` to prevent ACL-based bypasses.
- No explicit deny policy: add an explicit deny-by-default statement in the bucket policy for defense-in-depth.

Per AWS best practices, when disabling Block Public Access, compensating controls are required.
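A minimal sketch of those compensating controls, assuming the existing bucket resource is named `aws_s3_bucket.static_content` and the distribution is `aws_cloudfront_distribution.main` (both names are assumptions):

```hcl
resource "aws_cloudfront_origin_access_control" "static_content" {
  name                              = "static-content-oac"
  origin_access_control_origin_type = "s3"
  signing_behavior                  = "always"
  signing_protocol                  = "sigv4"
}

resource "aws_s3_bucket_ownership_controls" "static_content" {
  bucket = aws_s3_bucket.static_content.id
  rule {
    object_ownership = "BucketOwnerEnforced" # disables ACLs; only policies grant access
  }
}

resource "aws_s3_bucket_server_side_encryption_configuration" "static_content" {
  bucket = aws_s3_bucket.static_content.id
  rule {
    apply_server_side_encryption_by_default {
      sse_algorithm = "AES256"
    }
  }
}

# Allow reads only from this specific CloudFront distribution
data "aws_iam_policy_document" "static_content" {
  statement {
    sid       = "AllowCloudFrontOAC"
    actions   = ["s3:GetObject"]
    resources = ["${aws_s3_bucket.static_content.arn}/*"]

    principals {
      type        = "Service"
      identifiers = ["cloudfront.amazonaws.com"]
    }

    condition {
      test     = "StringEquals"
      variable = "AWS:SourceArn"
      values   = [aws_cloudfront_distribution.main.arn]
    }
  }
}

resource "aws_s3_bucket_policy" "static_content" {
  bucket = aws_s3_bucket.static_content.id
  policy = data.aws_iam_policy_document.static_content.json
}
```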
🤖 Prompt for AI Agents
In modules/s3/main.tf around lines 22-25 the bucket lacks server-side
encryption, object ownership enforcement, an explicit deny-by-default policy,
and uses an outdated CloudFront OAI pattern; add an
aws_s3_bucket_server_side_encryption_configuration resource to enforce AES256 or
aws:kms encryption for all objects, add an aws_s3_bucket_ownership_controls
resource with rule.object_ownership = "BucketOwnerEnforced", extend the bucket
policy to include an explicit deny statement that denies any Get/Put/Delete
actions not coming from the CloudFront Origin Access Control principal, and
update any CloudFront-related resources to use
aws_cloudfront_origin_access_control (OAC) instead of
aws_cloudfront_origin_access_identity and reference the OAC principal in the
bucket policy; ensure Terraform references and IAM ARNs are updated accordingly
and run terraform fmt/validate.
| resource "aws_default_security_group" "default" { | ||
| vpc_id = aws_vpc.main.id | ||
|
|
||
| ingress { | ||
| description = "Allow all inbound" | ||
| from_port = 0 | ||
| to_port = 0 | ||
| protocol = "-1" | ||
| cidr_blocks = ["0.0.0.0/0"] | ||
| } | ||
|
|
||
| egress { | ||
| from_port = 0 | ||
| to_port = 0 | ||
| protocol = "-1" | ||
| cidr_blocks = ["0.0.0.0/0"] | ||
| } | ||
|
|
||
| tags = { | ||
| Name = "${var.environment}-default-sg" | ||
| Environment = var.environment | ||
| } | ||
| } |
Critical: Default security group allows all inbound traffic from 0.0.0.0/0.
This configuration enables unrestricted ingress to all resources using the default SG. This violates AWS security best practices and the principle of least privilege.
Consider restricting ingress to specific protocols and source IPs:
resource "aws_default_security_group" "default" {
vpc_id = aws_vpc.main.id
ingress {
- description = "Allow all inbound"
- from_port = 0
- to_port = 0
- protocol = "-1"
- cidr_blocks = ["0.0.0.0/0"]
+ description = "Allow intra-VPC communication"
+ from_port = 0
+ to_port = 0
+ protocol = "-1"
+ cidr_blocks = [var.vpc_cidr]
}
egress {
from_port = 0
to_port = 0
protocol = "-1"
cidr_blocks = ["0.0.0.0/0"]
}
tags = {
Name = "${var.environment}-default-sg"
Environment = var.environment
}
}Alternatively, consider removing this resource entirely and relying on explicit, module-specific security groups with narrowly scoped rules.
Committable suggestion skipped: line range outside the PR's diff.
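Since the suggestion could not be applied directly, one alternative is worth sketching: keeping the `aws_default_security_group` resource but declaring it with no rule blocks, which makes Terraform strip every rule from the default SG so nothing attached to it can communicate. This is a sketch under the assumption that the module keeps the same resource and variable names:

```hcl
resource "aws_default_security_group" "default" {
  vpc_id = aws_vpc.main.id

  # No ingress or egress blocks: Terraform removes all rules from the
  # default security group, so resources must use explicit, purpose-built SGs.

  tags = {
    Name        = "${var.environment}-default-sg"
    Environment = var.environment
  }
}
```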
🤖 Prompt for AI Agents
In modules/vpc/main.tf around lines 20 to 42 the aws_default_security_group
currently permits unrestricted ingress from 0.0.0.0/0 which violates
least-privilege; replace this open rule by either removing the permissive
ingress block entirely (rely on explicit per-module security groups) or tighten
it to only allow required protocols/ports from specific CIDR blocks or
referenced security groups (e.g., limit SSH/RDP to admin CIDRs, app ports to
load balancer SGs, and use aws_security_group_rule with aws_security_group ids)
and update tags/variables accordingly to reflect the narrowed scope.
Summary by CodeRabbit
New Features
Configuration